8185 test refactor 2 #8405


Open · wants to merge 67 commits into base: dev

Conversation

@garciadias (Contributor) commented on Mar 28, 2025:

Fixes #8185

Description

This PR addresses items 2 and 3 of #8185 for a few test folders.
If @ericspod approves, I would merge these and then apply the same type of change to other files.

I would like to keep these PRs small: even though they share the same pattern of changes, merging them bit by bit makes them more manageable.

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@garciadias (Contributor, Author) commented:

Hi @ericspod,

Could you please review this PR? I believe it now follows the specification you laid out in issue #8185. I am limiting the modification to these few tests to keep the PR small. If you approve, I will proceed with more of these simplifications.

garciadias and others added 22 commits May 9, 2025 17:52
…or cleaner test generation

Signed-off-by: R. Garcia-Dias <[email protected]>
Co-authored-by: Eric Kerfoot <[email protected]>
Signed-off-by: Rafael Garcia-Dias <[email protected]>
@garciadias garciadias force-pushed the 8185-test-refactor-2 branch from 5ae6312 to 2edf166 Compare May 16, 2025 12:31
garciadias and others added 5 commits May 16, 2025 13:41
I, R. Garcia-Dias <[email protected]>, hereby add my Signed-off-by to this commit: 241e24c
I, R. Garcia-Dias <[email protected]>, hereby add my Signed-off-by to this commit: 2edf166
I, R. Garcia-Dias <[email protected]>, hereby add my Signed-off-by to this commit: 2430ac8
I, R. Garcia-Dias <[email protected]>, hereby add my Signed-off-by to this commit: bb54c57
I, R. Garcia-Dias <[email protected]>, hereby add my Signed-off-by to this commit: cd1b4fb

Signed-off-by: R. Garcia-Dias <[email protected]>
@KumoLiu KumoLiu self-requested a review May 23, 2025 14:17
@ericspod (Member) left a comment:

Hi @garciadias, it looks much better now compared to the current state of these tests. I have made a few comments, mostly code suggestions you can simply accept and double-check. In general it is good, but there are a few places where fewer lines of code could be used, or where things could be adjusted for clarity. Thanks!

Comment on lines +27 to +40
```python
TEST_CASES_CAB = [
    [
        {
            "spatial_dims": params["spatial_dims"],
            "dim": params["dim"],
            "num_heads": params["num_heads"],
            "bias": params["bias"],
            "flash_attention": False,
        },
        (2, params["dim"], *([16] * params["spatial_dims"])),
        (2, params["dim"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(spatial_dims=[2, 3], dim=[32, 64, 128], num_heads=[2, 4, 8], bias=[True, False])
]
```

Suggested change:

```python
TEST_CASES_CAB = [
    [
        {**params, "flash_attention": False},
        (2, params["dim"], *([16] * params["spatial_dims"])),
        (2, params["dim"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(spatial_dims=[2, 3], dim=[32, 64, 128], num_heads=[2, 4, 8], bias=[True, False])
]
```

We can reduce this a little more with Python's `**` unpacking syntax.
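For readers outside the MONAI test suite, the `dict_product` helper used throughout this PR can be sketched in a few lines. This is a hypothetical minimal version, not necessarily the actual implementation in `tests.test_utils`:

```python
from itertools import product


def dict_product(**kwargs):
    """Yield one dict per element of the Cartesian product of the keyword values."""
    keys = list(kwargs)
    for values in product(*kwargs.values()):
        yield dict(zip(keys, values))


# Merging each generated dict with fixed entries via ** keeps test-case tables short.
cases = [{**params, "flash_attention": False} for params in dict_product(dim=[32, 64], bias=[True, False])]
```

With two values per key this yields four cases, each carrying the fixed `flash_attention` entry alongside the generated ones.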

Comment on lines +29 to +50
```python
TEST_CASE_CABLOCK = [
    [
        {
            "hidden_size": params["hidden_size"],
            "num_heads": params["num_heads"],
            "dropout_rate": params["dropout_rate"],
            "rel_pos_embedding": params["rel_pos_embedding_val"] if not params["flash_attn"] else None,
            "input_size": params["input_size"],
            "use_flash_attention": params["flash_attn"],
        },
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 6, 8, 12],
        rel_pos_embedding_val=[None, RelPosEmbedding.DECOMPOSED],
        input_size=[(16, 32), (8, 8, 8)],
        flash_attn=[True, False],
    )
]
```

Suggested change:

```python
TEST_CASE_CABLOCK = [
    [
        {
            # exclude the helper key, then translate it into the real argument
            # (the lookup must use the renamed "use_flash_attention" key)
            **{k: v for k, v in params.items() if k != "rel_pos_embedding_val"},
            "rel_pos_embedding": params["rel_pos_embedding_val"] if not params["use_flash_attention"] else None,
        },
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 6, 8, 12],
        rel_pos_embedding_val=[None, RelPosEmbedding.DECOMPOSED],
        input_size=[(16, 32), (8, 8, 8)],
        use_flash_attention=[True, False],
    )
]
```

Also this should work.

Comment on lines +58 to +89
```python
for params in dict_product(
    spatial_dims=range(2, 4),
    kernel_size=[1, 3],
    stride=[1, 2],
    norm_name=["batch", "instance"],
    in_size=[15, 16],
    trans_bias=[True, False],
):
    spatial_dims = params["spatial_dims"]
    kernel_size = params["kernel_size"]
    stride = params["stride"]
    norm_name = params["norm_name"]
    in_size = params["in_size"]
    trans_bias = params["trans_bias"]

    out_size = in_size * stride
    test_case = [
        {
            "spatial_dims": spatial_dims,
            "in_channels": in_channels,
            "out_channels": out_channels,
            "kernel_size": kernel_size,
            "norm_name": norm_name,
            "stride": stride,
            "upsample_kernel_size": stride,
            "trans_bias": trans_bias,
        },
        (1, in_channels, *([in_size] * spatial_dims)),
        (1, out_channels, *([out_size] * spatial_dims)),
        (1, out_channels, *([in_size * stride] * spatial_dims)),
    ]
    TEST_UP_BLOCK.append(test_case)
```

Suggested change:

```python
param_dicts = dict_product(
    spatial_dims=range(2, 4),
    kernel_size=[1, 3],
    upsample_kernel_size=[1, 2],
    norm_name=["batch", "instance"],
    in_size=[15, 16],
    trans_bias=[True, False],
)
for params in param_dicts:
    spatial_dims = params["spatial_dims"]
    stride = params["upsample_kernel_size"]
    in_size = params.pop("in_size")  # don't want in_size in the dictionary below
    out_size = in_size * stride
    test_case = [
        # keep an explicit "stride" entry, since the original set both it and upsample_kernel_size
        {**params, "in_channels": in_channels, "out_channels": out_channels, "stride": stride},
        (1, in_channels, *([in_size] * spatial_dims)),
        (1, out_channels, *([out_size] * spatial_dims)),
        (1, out_channels, *([in_size * stride] * spatial_dims)),
    ]
    TEST_UP_BLOCK.append(test_case)
```

I think this is equivalent and a little smaller.
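The pop-then-merge idiom in the suggestion above is the key trick: values needed only for shape arithmetic are removed before the remaining parameters are unpacked as constructor arguments. A self-contained sketch (the `dict_product` stand-in and all names here are illustrative, not taken from the PR):

```python
from itertools import product


def dict_product(**kwargs):
    # hypothetical minimal stand-in for tests.test_utils.dict_product
    keys = list(kwargs)
    return [dict(zip(keys, vals)) for vals in product(*kwargs.values())]


test_cases = []
for params in dict_product(spatial_dims=[2, 3], upsample_kernel_size=[1, 2], in_size=[15, 16]):
    in_size = params.pop("in_size")  # consumed for shapes only; must not reach the constructor kwargs
    stride = params["upsample_kernel_size"]
    test_cases.append([{**params, "in_channels": 4}, (1, 4, *([in_size * stride] * params["spatial_dims"]))])
```

After the `pop`, the merged dictionary holds exactly the keys the constructor would accept, while `in_size` still feeds the expected-shape computation.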

Comment on lines +21 to 41
```python
from tests.test_utils import dict_product

TEST_CASE_RESBLOCK = [
    [
        {
            "spatial_dims": params["spatial_dims"],
            "in_channels": params["in_channels"],
            "kernel_size": params["kernel_size"],
            "norm": params["norm"],
        },
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(
        spatial_dims=range(2, 4),
        in_channels=range(1, 4),
        kernel_size=[1, 3],
        norm=[("group", {"num_groups": 1}), "batch", "instance"],
    )
]
```


Suggested change:

```python
from tests.test_utils import dict_product

TEST_CASE_RESBLOCK = [
    [
        params,
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(
        spatial_dims=range(2, 4),
        in_channels=range(1, 4),
        kernel_size=[1, 3],
        norm=[("group", {"num_groups": 1}), "batch", "instance"],
    )
]
```

Comment on lines +27 to +46
```python
TEST_CASE_TRANSFORMERBLOCK = [
    [
        {
            "hidden_size": params["hidden_size"],
            "num_heads": params["num_heads"],
            "mlp_dim": params["mlp_dim"],
            "dropout_rate": params["dropout_rate"],
            "with_cross_attention": params["with_cross_attention"],
        },
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 8, 12],
        mlp_dim=[1024, 3072],
        with_cross_attention=[False, True],
    )
]
```

Suggested change:

```python
TEST_CASE_TRANSFORMERBLOCK = [
    [
        params,
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 8, 12],
        mlp_dim=[1024, 3072],
        with_cross_attention=[False, True],
    )
]
```

Comment on lines +35 to +55
```python
TEST_CASE_UNETR_BASIC_BLOCK = [
    [
        {
            "spatial_dims": params["spatial_dims"],
            "in_channels": 16,
            "out_channels": 16,
            "kernel_size": params["kernel_size"],
            "norm_name": params["norm_name"],
            "stride": params["stride"],
        },
        (1, 16, *([params["in_size"]] * params["spatial_dims"])),
        (1, 16, *([_get_out_size(params)] * params["spatial_dims"])),
    ]
    for params in dict_product(
        spatial_dims=range(1, 4),
        kernel_size=[1, 3],
        stride=[2],
        norm_name=[("GROUP", {"num_groups": 16}), ("batch", {"track_running_stats": False}), "instance"],
        in_size=[15, 16],
    )
]
```

Suggested change:

```python
norm_names = [("GROUP", {"num_groups": 16}), ("batch", {"track_running_stats": False}), "instance"]
param_dicts = dict_product(
    spatial_dims=range(1, 4), kernel_size=[1, 3], stride=[2], norm_name=norm_names, in_size=[15, 16]
)
TEST_CASE_UNETR_BASIC_BLOCK = []
for params in param_dicts:
    out_size = _get_out_size(params)  # compute while in_size is still present in params
    in_size = params.pop("in_size")
    input_param = {**params, "in_channels": 16, "out_channels": 16}
    input_shape = (1, 16, *([in_size] * params["spatial_dims"]))
    expected_shape = (1, 16, *([out_size] * params["spatial_dims"]))
    TEST_CASE_UNETR_BASIC_BLOCK.append([input_param, input_shape, expected_shape])
```

Breaking up what you're doing into pieces in different ways can reduce the number of lines and, I think, improves readability. The multi-line for-expression is often fine, but creating a norm_names variable and moving definitions around reduces the line count, and explicit variable names for the components of each test case are clearer.

Comment on lines +26 to 32
```python
TESTS = [
    (params["keepdim"], params["p"], params["update_meta"], params["list_output"])
    for params in dict_product(
        p=TEST_NDARRAYS, keepdim=[True, False], update_meta=[True, False], list_output=[True, False]
    )
]
```


Suggested change:

```python
TESTS = list(dict_product(keepdim=[True, False], p=TEST_NDARRAYS, update_meta=[True, False], list_output=[True, False]))
```

You might not even need list(...) here.
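Whether the `list(...)` wrapper is needed depends on how the result is consumed. If `dict_product` returns a generator, as in this hypothetical sketch, it is exhausted after a single pass, so wrapping it in `list(...)` is the safe choice whenever the cases are iterated more than once:

```python
from itertools import product


def dict_product(**kwargs):
    # hypothetical generator version of the helper
    keys = list(kwargs)
    return (dict(zip(keys, vals)) for vals in product(*kwargs.values()))


gen = dict_product(keepdim=[True, False], p=[float, int])
first_pass = list(gen)
second_pass = list(gen)  # a generator yields nothing on a second pass
```

If the test framework only iterates the cases once, dropping `list(...)` is harmless; otherwise the second pass silently sees zero cases.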

Comment on lines +78 to +100
```python
TESTS.extend(
    [
        [
            np.arange(4).reshape((1, 2, 2)) + 1.0,  # data
            *params["device"],
            dst,
            {
                "dst_keys": "dst_affine",
                "dtype": params["dtype"],
                "align_corners": params["align"],
                "mode": params["interp_mode"],
                "padding_mode": "zeros",
            },
            expct,
        ]
        for params in dict_product(
            device=TEST_DEVICES,
            align=[False, True],
            dtype=[torch.float32, torch.float64],
            interp_mode=["nearest", "bilinear"],
        )
    ]
)
```

Suggested change:

```python
TESTS += [
    [
        np.arange(4).reshape((1, 2, 2)) + 1.0,  # data
        *params.pop("device"),
        dst,
        {**params, "dst_keys": "dst_affine", "padding_mode": "zeros"},
        expct,
    ]
    # the kwargs are named after the transform's arguments so the merge above produces the right keys
    for params in dict_product(
        device=TEST_DEVICES,
        align_corners=[False, True],
        dtype=[torch.float32, torch.float64],
        mode=["nearest", "bilinear"],
    )
]
```

I think this works?

```python
    upsample_mode=list(UpsampleMode),
    vae_estimate_std=[True, False],
)
]
```


Here again the number of lines can be reduced by defining dictionaries with the {**params, "something_else": 1, ...} pattern.
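As a standalone illustration of that pattern (the keys below are invented for the example): in a dict display, entries listed after `**params` override any generated key of the same name, so fixed arguments can be layered over the product dictionaries:

```python
params = {"spatial_dims": 2, "dropout_rate": 0.1}

# later entries win on key collisions, so fixed values can override generated ones;
# the original params dict is left untouched
case = {**params, "vae_estimate_std": True, "dropout_rate": 0.0}
```

This is why the merge-based rewrites elsewhere in this review are safe: the unpacking copies, it never mutates.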

Comment on lines +38 to +39
```python
**({"spatial_dims": 2} if params["nd"] == 2 else {}),
**({"post_activation": False} if params["nd"] == 2 and params["classification"] else {}),
```

I feel this usage of ** with a bracketed conditional expression isn't very readable, so the whole statement may be better implemented as a for-loop, where the dictionary can be built up with actual if statements. You can also use the {**params, ...} pattern here to reduce the line count a lot.
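A sketch of the for-loop alternative; the keys and sample values here are illustrative, not taken from the actual test file:

```python
cases = []
for params in [{"nd": 2, "classification": True}, {"nd": 3, "classification": False}]:
    case = {**params}  # start from the generated parameters
    if params["nd"] == 2:
        # explicit branches replace the **({...} if ... else {}) idiom
        case["spatial_dims"] = 2
        if params["classification"]:
            case["post_activation"] = False
    cases.append(case)
```

The conditional keys now appear only when their conditions hold, and each branch is readable on its own.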

Successfully merging this pull request may close these issues.

Test Refactor
2 participants